- Understand key timeseries concepts and features
- See example timeseries that we'll use on the course
- Understand the types of analysis that we want to do, and why
“With one dimension marching along to the regular rhythm of seconds, minutes, hours, days, weeks, months, years, centuries, or millennia, the natural ordering of the time scale gives this design a strength and efficiency of interpretation found in no other graphic arrangement.”
Edward R. Tufte, The Visual Display of Quantitative Information, p. 28
A tenth- or eleventh-century time series showing the positions of the planets over time. http://euclid.psych.yorku.ca/SCS/Gallery/milestone/sec2.html
Diagram showing the distance of the planets from the earth in 1732, also showing a total lunar eclipse and a partial solar eclipse in that year. Nicolaas Kruik (1678-1754), Dutch astronomer and meteorologist.
A graph of solar warming vs. latitude.
Johann Heinrich Lambert (1728-1777)
William Playfair's trade-balance time-series chart, published in his Commercial and Political Atlas, 1786.
What do we use time series methods for? Often, we are trying to do at least one of the following:
Monthly NH sea ice anomaly from 1978 to the present.
# ldeaths: monthly deaths from lung diseases in the UK, 1974-1979 (built-in R dataset)
plot(ldeaths)
Monthly Central England Temperature (CET) from 1659
δ18O? Global financial crash? Regime changes?
An Essay towards solving a Problem in the Doctrine of Chances (1763)
\[P(A|B) = \frac{P(B|A) P(A)}{P(B)}\]
Bayes' theorem can be written in words as:
\[\mbox{posterior is proportional to likelihood times prior}\] … or … \[\mbox{posterior} \propto \mbox{likelihood} \times \mbox{prior}\]
Each of the three terms (posterior, likelihood, and prior) is a probability distribution (pdf).
In a Bayesian model, every item of interest is either data (which we will write as \(x\)) or parameters (which we will write as \(\theta\)). Often the parameters are divided into those of interest and other nuisance parameters.
Bayes' equation is usually written mathematically as: \[p(\theta|x) \propto p(x|\theta) \times p(\theta)\] or, more fully: \[p(\theta|x) = \frac{p(x|\theta) \times p(\theta)}{p(x)}\]
If we assume that the calls she hears are normally distributed then \(x\) follows a normal distribution with mean \(\theta\) and standard deviation 0.8s, written \(x|\theta \sim N(\theta,0.8^2)\). The prior distribution is \(\theta \sim N(2.3,0.5^2)\).
Note: posterior mean is 2.52 seconds and standard deviation is 0.42 seconds.
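These numbers follow from the standard normal-normal conjugate update, a worked step using only the quantities defined above. Since both the prior and the likelihood are normal, the posterior is also normal:
\[\theta|x \sim N\left(\frac{\mu_0/\sigma_0^2 + x/\sigma^2}{1/\sigma_0^2 + 1/\sigma^2},\; \frac{1}{1/\sigma_0^2 + 1/\sigma^2}\right)\]
Substituting the prior mean \(\mu_0 = 2.3\), prior standard deviation \(\sigma_0 = 0.5\), observation \(x = 3.1\) and likelihood standard deviation \(\sigma = 0.8\) gives mean \((2.3/0.25 + 3.1/0.64)/(4 + 1.5625) \approx 2.52\) and standard deviation \(\sqrt{1/5.5625} \approx 0.42\), as quoted above.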
Code used to produce previous plot:
# Create grid for theta
theta = seq(0,6,length=100)
# Evaluate prior, likelihood and posterior
prior = dnorm(theta,mean=2.3,sd=0.5)
likelihood = dnorm(3.1,mean=theta,sd=0.8)
posterior = prior*likelihood
# Produce plot
plot(theta,likelihood/sum(likelihood),type='l',
ylab='Probability',ylim=c(0,0.06))
lines(theta,prior/sum(prior),col='red')
lines(theta,posterior/sum(posterior),col='blue')
legend('topright',legend=c('Likelihood','Prior',
'Posterior'),
col=c('black','red','blue'),lty=1)
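As a quick numerical check on the quoted posterior mean and standard deviation, we can reuse the same grid approximation: normalise the posterior values so they sum to one, then compute the weighted mean and standard deviation over the grid. (A small sketch; the variable names match the code above.)

```r
# Reconstruct the grid from the code above
theta = seq(0,6,length=100)
prior = dnorm(theta,mean=2.3,sd=0.5)
likelihood = dnorm(3.1,mean=theta,sd=0.8)
posterior = prior*likelihood

# Normalise so the weights sum to 1
w = posterior/sum(posterior)

# Posterior mean and standard deviation from the grid weights
post_mean = sum(theta*w)
post_sd = sqrt(sum((theta-post_mean)^2*w))
round(c(mean=post_mean, sd=post_sd), 2)  # approximately 2.52 and 0.42
```

The grid spacing (about 0.06 s) is small relative to the posterior spread, so the discrete approximation agrees with the exact conjugate answer to two decimal places.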
The Bayesian approach has numerous advantages: